Feature/support enumeration type hints #37
Conversation
Subclasses of enum.Enum can now be used as type hints
One thing I've realized is the following:

```python
from enum import Enum

class IntEnum(Enum):
    A = 1
    B = 2

IntEnum.A == 1  # False
```

Which means that when the LLM calls a function which accepts a type of `IntEnum`, it will fail because `IntEnum.A` is not the same as `1`, for example.
To me there are two possible options:
- We could warn the user in the documentation that they need to deal with this. Not ideal, but the fix would just be to add `IntEnum(1)`.
- We could wrap the provided function to do the conversion ourselves. This would obviously be more convenient, but I wonder if that introduces other problems.

What do you think @lorenzocelli
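The second option could be sketched as a decorator that inspects the wrapped function's annotations and coerces plain JSON values into the expected enum members. This is only an illustration under assumed names (`coerce_enums`, `pick` are hypothetical, not part of the project):

```python
from enum import Enum
from inspect import signature

class IntEnum(Enum):
    A = 1
    B = 2

def coerce_enums(func):
    """Wrap func so plain values are converted to the enum members
    its annotations expect, before the call is forwarded."""
    sig = signature(func)

    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = sig.parameters[name].annotation
            # only coerce when the annotation is an Enum subclass
            # and the value is not already a member of it
            if (isinstance(ann, type) and issubclass(ann, Enum)
                    and not isinstance(value, ann)):
                bound.arguments[name] = ann(value)  # e.g. 1 -> IntEnum.A
        return func(*bound.args, **bound.kwargs)

    return wrapper

@coerce_enums
def pick(option: IntEnum) -> str:
    return option.name

pick(1)  # returns 'A' instead of raising
```

The decorator shape keeps the conversion out of the user's function, which is the convenience the second option is after; the open question about "other problems" would surface around `Optional` or union annotations, which this sketch does not handle.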
Right. I agree with you, the first option is not ideal. I think we could implement the second more easily if we stored the list of parameter schemas. For example:

```python
def __call__(self, *args, **kwargs):
    p_schemas: list[ParameterSchema] = self.schema.parameters
    for schema in p_schemas:
        if isinstance(schema, ListParameterSchema):
            # replace value with corresponding enum type
            kwargs[schema.parameter.name] = IntEnum(kwargs[schema.parameter.name])
    return self.func(*args, **kwargs)
```

We could also delegate to
Yeah I think that sounds good!
Add support for literal and enum type hints and related tests
We're still missing the value-to-enum conversion on invocation
@lorenzocelli I've marked this as a draft. Let me know when it's ready for me to review 👍
@siliconlad when we convert the enum to a set of values to send to the API, do you think we should keep the values or the names? Names are usually more informative than values, such as in this example from the Python docs:

```python
class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3
```
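For reference, the two choices produce different payloads and require different lookups when converting back on invocation. A minimal sketch:

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

# What would be sent to the API under each choice:
values = [member.value for member in Color]  # [1, 2, 3]
names = [member.name for member in Color]    # ['RED', 'GREEN', 'BLUE']

# Converting back on invocation differs accordingly:
Color(1)       # lookup by value -> Color.RED
Color['RED']   # lookup by name  -> Color.RED
```

Note that the name-based round trip uses subscript lookup (`Color['RED']`), not the call form (`Color('RED')`), which would raise `ValueError` here since the values are integers.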
…lues on invocation
In order to store the parameter schemas inside the function schema, I found it convenient to remove the static methods in favour of a more instance-oriented approach. Another way could be to keep the static methods and provide them with a
The LLM might perform better with the names, I think.
I suspected so, I will mark this as a draft again then
Convert Enum type hints to names instead of values in the parameters schema
A couple of comments :)
(some conflicts to resolve too)
Store parameter_schemas on initialisation but do not create the schema dictionary until to_json is called
Change the internal logic of ParameterSchema to go from _add_...(dict) methods to _get_...(), for similarity with FunctionSchema
Refactored
GPT-enabled functions can now be invoked with json-converted values as positional arguments
Add support for keyword arguments of type enum.Enum with a default value. Rename ParameterSchema.parse_value to decode_value, and add encode_value method to encode the default value when necessary.
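The `decode_value`/`encode_value` pair described above could look roughly like the following. This is a hedged sketch, not the project's actual implementation; the constructor and field names are assumptions:

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

class EnumTypeParameterSchema:
    """Hypothetical sketch of an enum-aware parameter schema."""

    def __init__(self, enum_type):
        self.enum_type = enum_type

    def decode_value(self, value):
        # JSON name -> enum member, e.g. 'RED' -> Color.RED;
        # pass through values that are already members
        if isinstance(value, self.enum_type):
            return value
        return self.enum_type[value]

    def encode_value(self, member):
        # enum member -> JSON name, e.g. Color.RED -> 'RED',
        # used to encode a default value when necessary
        return member.name

schema = EnumTypeParameterSchema(Color)
schema.decode_value('RED')        # Color.RED
schema.encode_value(Color.GREEN)  # 'GREEN'
```

Keeping encode and decode on the same schema object makes the name-based round trip (member → name → member) symmetric, which matters once defaults are encoded with `encode_value` and arguments decoded with `decode_value`.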
To support |
Couple of small comments. Otherwise looks good 🚀
Use dictionary comprehension in ParameterSchema.to_json and check the value type in EnumTypeParameterSchema.decode_value
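The dictionary-comprehension pattern for `to_json` might look like this sketch, where dropping `None` entries keeps optional fields out of the payload (field and method names here are assumptions, not the project's API):

```python
from typing import Optional, List

class ParameterSchema:
    """Hypothetical sketch; field names are assumptions."""

    def __init__(self, type_name: str,
                 description: Optional[str] = None,
                 enum_names: Optional[List[str]] = None):
        self.type_name = type_name
        self.description = description
        self.enum_names = enum_names

    def to_json(self) -> dict:
        fields = {
            'type': self.type_name,
            'description': self.description,
            'enum': self.enum_names,
        }
        # dictionary comprehension drops unset optional fields
        return {k: v for k, v in fields.items() if v is not None}

schema = ParameterSchema('string', enum_names=['RED', 'GREEN', 'BLUE'])
schema.to_json()  # {'type': 'string', 'enum': ['RED', 'GREEN', 'BLUE']}
```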
I updated the 'enumerations' section of the README
Co-authored-by: Angus Stewart <siliconlad@protonmail.com>
LGTM!
Issue #10